Extending Your Cloud Deployments to Edge

Amit Sharma
Engineered @ Publicis Sapient
8 min read · Jul 22, 2022


Introduction

There may be scenarios in edge computing where the business is not so much concerned with data collection and analysis on edge devices as with its ability to deploy applications to its edge locations and keep them in sync.
Some companies do not yet have a distributed computing infrastructure and are weighing options for transitioning to one. Other companies may already have hundreds of edge servers located in their storefronts or warehouses across the country (or even the world) and are seeking a way to keep the applications running on those servers up to date so that their business needs are met.

These businesses are looking for solutions that let them deploy to the edge seamlessly and, when possible, integrate this extension to the edge with their existing cloud-based deployments.
This blog caters exactly to that need of deploying and extending to the edge with Kubernetes, illustrated below with an AWS Outposts and EKS example.

Common problems of software updates at in-store locations

Without edge computing, updating and managing software on customer premises can be a long and tedious process. Depending on the number of stores, it can take months to update every system, since some companies still rely on sending people on-site to update software manually. The gap between when the first store is updated and when the last one is can also cause problems as the applications fall out of sync. The same applies to bug fixes, where small issues compound when some locations are multiple updates behind.

With a distributed edge solution, businesses can manage their edge environments from a single control plane, even though the infrastructure is distributed across multiple edges. The servers sitting in those locations get the same operational consistency as cloud-based Kubernetes clusters, and changes made to the cloud-based clusters propagate to all edge-location clusters in mere minutes.

Edge computing with Kubernetes for centralized management of disparate infrastructure

The features built into Kubernetes make the platform well-suited for a variety of use cases. You can use Kubernetes to manage any on-premises or cloud-based environment. It can be used in a multi-cloud context, where Kubernetes becomes the central control plane for multiple clouds; alternatively, in a hybrid context, it can manage both cloud-based and on-prem infrastructure at the same time.

  1. It can manage multiple types of infrastructure at the same time. That makes Kubernetes ideal for an edge computing scenario in which some applications (or parts of applications) reside on-premises or in the cloud, while others run at the edge. Kubernetes can manage all of these resources centrally, eliminating the need to juggle a different management tool for each environment.
  2. Because of its automated management features, Kubernetes can react instantly to changes at the edge. The fact that Kubernetes can deploy new container or application versions quickly is also an advantage for edge computing use cases. If a new location needs to be brought online or an application requires an update, Kubernetes offers the automated deployment features necessary to implement the change quickly.
  3. The fact that Kubernetes is a proven and easy-to-deploy platform makes it attractive for edge computing in the sense that edge remains a relatively new technological niche. Being able to implement edge environments using technology that organizations already know and trust, and that they can set up quickly, beats having to learn and deploy brand-new management solutions that are purpose-built for the edge.

An edge computing flavor: AWS Outposts

Outposts are AWS’s offering for edge servers: they give users the ability to run AWS services on premises through servers and racks that can be purchased from AWS. The infrastructure is managed by AWS, so there is less burden on the customer to perform updates. Outposts also connect seamlessly to both the customer network and the AWS Region to create a unique hybrid cloud experience.

AWS Outposts setup with an extended VPC and subnet

The Outpost rack is well suited for accelerating deployment, as mentioned above: it has ample compute power and allows for the integration of container orchestration services such as EKS. With a few simple actions, the customer can push updates or new software to edge locations. This acceleration of delivery is achieved by allocating a single, overarching EKS cluster and its control plane in the cloud, so that the cluster and all of its nodes are reachable and updatable from anywhere with a decent connection. Outposts are linked to this cluster by creating a self-managed node group and attaching it to the cluster through the control plane. Customers can push deployments to these self-managed node groups running on the on-premises Outposts from the regional cloud, or the Outpost’s node groups will be updated whenever the Outpost reaches out to the cloud (at least once a day).

How can you set it up?

VPC

To begin, we established a VPC with an assortment of private and public subnets. Most of these subnets live in Availability Zones within the AWS Region. We also created a dedicated subnet to represent the Outpost, whose CIDR range could not overlap with any subnet in the Region. Per the Outposts documentation, this subnet is a private subnet that extends the Availability Zone the Outpost is anchored to.
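
For reference, an Outpost subnet in Terraform is just a regular subnet with an outpost_arn attached. The following is a minimal sketch rather than the project’s actual code: the CIDR block, Availability Zone, and variable name are illustrative assumptions, and it references the VPC resource shown in the Terraform section later in this post.

variable "outpost_arn" {
  description = "ARN of the AWS Outpost (assumed input, arn:aws:outposts:...)"
  type        = string
}

resource "aws_subnet" "outpost" {
  vpc_id            = aws_vpc.main.id   # VPC defined in the Terraform section below
  cidr_block        = "10.0.100.0/24"   # must not overlap any Region subnet
  availability_zone = "us-east-1a"      # the AZ the Outpost is anchored to
  outpost_arn       = var.outpost_arn   # ties the subnet to the Outpost

  tags = { Name = "outpost-private" }
}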

Separate Outpost Subnet

Cluster

Next, an Amazon EKS cluster was created to run the Kubernetes applications. When creating the EKS cluster, it is important to specify only subnets that run in the AWS Region; these are used for the creation of the Kubernetes control plane. While it is not necessary to specify the Outpost subnets, you should still ensure that the Outpost’s parent Region supports EKS.
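
A minimal sketch of that cluster definition follows. The cluster name, IAM role, and subnet names are illustrative assumptions, and the IAM role definition is omitted for brevity.

resource "aws_eks_cluster" "main" {
  name     = "edge-demo"                    # assumed cluster name
  role_arn = aws_iam_role.eks_cluster.arn   # cluster IAM role (definition omitted)

  vpc_config {
    # Regional private subnets only -- no Outpost subnets here.
    subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
  }
}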

Creating Nodes & Pods within Cluster

Self-managed node groups for the edge

AWS Outposts currently support self-managed nodes on an EKS cluster (managed nodes and Fargate are not supported). During the creation of these self-managed node groups, you must specify the subnets that run on your AWS Outpost. With self-managed node groups, you have more control over the server, but you are responsible for managing it. This includes patching and upgrading the AMI and the nodes.
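
Below is a hedged sketch of what such a self-managed node group can look like in Terraform: a launch template built on an EKS-optimized AMI, plus an Auto Scaling group pinned to the Outpost subnet. The AMI filter, instance type, sizes, and node label are illustrative assumptions, and the node IAM role and instance profile are omitted for brevity.

# EKS-optimized AMI for the self-managed nodes (filter is an assumption;
# match it to your cluster's Kubernetes version).
data "aws_ami" "eks_optimized" {
  owners      = ["amazon"]
  most_recent = true

  filter {
    name   = "name"
    values = ["amazon-eks-node-1.21-*"]
  }
}

resource "aws_launch_template" "outpost_nodes" {
  name_prefix   = "outpost-node-"
  image_id      = data.aws_ami.eks_optimized.id
  instance_type = "m5.xlarge"   # must match the capacity available on your Outpost

  # Standard EKS bootstrap; the node label is used later to target deployments.
  user_data = base64encode(<<-EOF
    #!/bin/bash
    /etc/eks/bootstrap.sh ${aws_eks_cluster.main.name} \
      --kubelet-extra-args '--node-labels=node-type=outpost'
  EOF
  )
}

resource "aws_autoscaling_group" "outpost_nodes" {
  desired_capacity    = 2
  min_size            = 1
  max_size            = 3
  vpc_zone_identifier = [aws_subnet.outpost.id]   # the Outpost subnet

  launch_template {
    id      = aws_launch_template.outpost_nodes.id
    version = "$Latest"
  }

  # Lets the nodes join the cluster and be discovered as part of it.
  tag {
    key                 = "kubernetes.io/cluster/${aws_eks_cluster.main.name}"
    value               = "owned"
    propagate_at_launch = true
  }
}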

Extension of a self-managed node group on the Outpost subnet

EKS managed or self-managed node groups for the cloud

We also provisioned an EKS managed node group on the same cluster, but this time specifying the non-Outpost subnets in the Region.
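
A sketch of that managed node group, again with illustrative names and the node IAM role omitted:

resource "aws_eks_node_group" "regional" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "regional-nodes"
  node_role_arn   = aws_iam_role.eks_nodes.arn   # definition omitted
  subnet_ids      = [aws_subnet.private_a.id, aws_subnet.private_b.id]   # non-Outpost subnets

  scaling_config {
    desired_size = 2
    min_size     = 1
    max_size     = 3
  }

  # Label used to target deployments at the cloud nodes.
  labels = { "node-type" = "cloud" }
}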

Finally, in order to create pods in each of these node groups, we selected the groups via node labels and referenced an image stored in Amazon ECR (Elastic Container Registry). This deployed the same application to both the AWS Region and the Outpost.
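
Using the Terraform kubernetes provider, a deployment pinned to the Outpost nodes could look like the sketch below; the provider wiring to the EKS cluster is omitted, and the labels, names, and ECR image URI are placeholders. A second deployment with node-type=cloud would target the Regional node group the same way.

resource "kubernetes_deployment" "app_outpost" {
  metadata {
    name = "demo-app-outpost"
  }

  spec {
    replicas = 2

    selector {
      match_labels = { app = "demo" }
    }

    template {
      metadata {
        labels = { app = "demo" }
      }

      spec {
        # Pin these pods to the self-managed Outpost node group.
        node_selector = { "node-type" = "outpost" }

        container {
          name  = "demo"
          image = "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-app:latest"   # placeholder ECR image
        }
      }
    }
  }
}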

Think automation with Infrastructure as Code (Terraform), same as your cloud operations

We used Terraform to build the infrastructure detailed above. Terraform is a leading Infrastructure as Code tool that lets users stand up large projects, and destroy them again, by running a few commands, while providing modules that make the infrastructure easy to set up. It can also be used for configuration management, so we used it to create the pods that showcase the application deployment.

Following is a Terraform snippet that creates the VPC. Note that the Outpost subnet needs the CIDR block that you want to assign to the Outpost, as well as the Availability Zone the Outpost is anchored to. The NAT Gateways in this VPC allow the nodes to connect with each other and call out to the internet, but they block any inbound traffic from the internet unless ingress security rules explicitly allow it.

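What follows is a minimal sketch of the VPC described above, not a definitive implementation: the CIDR blocks, names, single NAT Gateway, and omitted route table associations are simplifying assumptions. The Outpost subnet itself was shown earlier.

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags       = { Name = "edge-demo" }
}

# Public subnet hosting the NAT Gateway.
resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.0.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

# Regional private subnets for the EKS control plane and cloud nodes.
resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "private_b" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_eip" "nat" {
  domain = "vpc"
}

# NAT lets private nodes call out to the internet while blocking
# unsolicited inbound traffic.
resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_a.id
  depends_on    = [aws_internet_gateway.igw]
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}
# Route table associations omitted for brevity.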

Considerations for using an Outpost

  1. The first consideration is the configuration of the Outpost device itself, which is established on a case-by-case basis to satisfy the needs of the business and can include networking support, EC2 compute capacity, and EBS or S3 storage capacity.
  2. An additional consideration is the service link, which is an encrypted set of VPN connections that are used whenever the Outpost device communicates with the AWS Region. There are two options for this service link connection, but either way it will enable communication between the Outpost and the AWS Region for both management of the Outpost as well as intra-VPC traffic between the Region and Outpost. The first option is private connectivity, which is closest to what we have implemented in our solution because it establishes the connection using an existing VPC and subnet that we have specified. You would select this option when creating the Outpost in the Outposts console and then the connection would be established after the Outpost is installed. This option minimizes public internet exposure by way of the VPC. Alternatively, the Outpost is able to create the service link VPN back to the AWS Region through public Region connectivity. With this approach, the Outpost needs connectivity to the Region’s public IP ranges, which can be achieved either through the public internet or AWS Direct Connect service.
  3. The final consideration is the local gateway, which serves two purposes: it provides a target in your VPC route tables for on-premises-destined traffic, and it performs network address translation for instances that have been assigned addresses from your customer-owned IP pool. You also use the local gateway for internet-bound traffic; see the sketch after this list.
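
As a small illustration of that last point, here is a hedged Terraform sketch of a route that sends on-premises-bound traffic through the Outpost local gateway; the destination CIDR and the gateway ID variable are assumptions.

variable "local_gateway_id" {
  description = "ID of the Outpost local gateway (assumed input, lgw-...)"
  type        = string
}

resource "aws_route" "to_on_prem" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "192.168.0.0/16"   # assumed on-premises network range
  local_gateway_id       = var.local_gateway_id
}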

Video Demo

https://lion.box.com/s/c4es0pr0rd5hy6umem4bc0fdzdzhse6a

Conclusion

With a distributed edge solution, businesses can manage their edge environments from a single control plane even though the infrastructure is distributed across multiple edge locations, giving them centralized control over software distribution and management.

There are offerings from specific vendors such as HashiCorp Nomad, as well as from other major cloud providers, including Azure (Azure Stack) and GCP (Anthos), which can be evaluated against specific business needs. The concept of deploying a containerized application to the edge, however, remains much the same.

Authors

Christian Kowalczyk : Junior Associate Software Development

Shelby Theisen : Junior Associate Software Development

Daniel Pimentel : Junior Associate Software Development

Amit Kumar Sharma : Director Engineering
